Online media data, in the form of images and videos, have become mainstream communication channels. However, recent advances in deep learning, particularly deep generative models, have opened the door to producing perceptually convincing images and videos at low cost, which not only poses a serious threat to the trustworthiness of digital information but also has severe societal implications. This has motivated a growing research interest in media tampering detection, i.e., using deep learning techniques to examine whether media data have been maliciously manipulated. Depending on the content of the targeted images, media forgery can be divided into image tampering and Deepfake techniques. The former typically moves or erases visual elements in ordinary images, while the latter manipulates the expressions and even the identities of human faces. Accordingly, the means of defense are image tampering detection and Deepfake detection, which share a wide variety of properties. In this paper, we provide a comprehensive review of current media tampering detection approaches and discuss the challenges and future research trends in this field.
Neural Radiance Fields (NeRF) have achieved great success in representing complex 3D scenes with high-resolution detail and high memory efficiency. However, current NeRF-based pose estimators have no initial pose prediction and are prone to local optima during optimization. In this paper, we present LATITUDE: global localization with a Truncated Dynamic Low-pass Filter, which introduces a two-stage localization mechanism in city-scale NeRF. In the place-recognition stage, we train a regressor on images generated by the trained NeRFs, which provides an initial value for global localization. In the pose-optimization stage, we minimize the residual between the observed image and the rendered image by directly optimizing the pose on the tangent plane. To avoid convergence to local optima, we introduce a Truncated Dynamic Low-pass Filter (TDLF) for coarse-to-fine pose registration. We evaluate our method on both synthetic and real-world data and show its potential for high-precision navigation in large-scale city scenes. Code and data will be publicly available at https://github.com/jike5/latitude.
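To make the pose-optimization stage concrete, here is a minimal sketch assuming a differentiable `render_fn` stands in for the trained NeRF; the additive 6-DoF update and the cosine easing of the frequency mask are illustrative simplifications, not the paper's exact formulation.

```python
import torch

def tdlf_weights(num_freqs: int, alpha: float) -> torch.Tensor:
    """Truncated low-pass weights over positional-encoding frequencies:
    bands above `alpha` are zeroed, the transition band is smoothly eased."""
    k = torch.arange(num_freqs, dtype=torch.float32)
    w = torch.clamp(alpha - k, 0.0, 1.0)           # 1 for low bands, 0 above alpha
    return 0.5 * (1.0 - torch.cos(torch.pi * w))   # cosine easing of the cutoff

def optimize_pose(render_fn, observed, init_pose, steps=200, lr=1e-2):
    """Refine a 6-DoF pose by minimizing the photometric residual between the
    observed image and the rendering, coarse-to-fine via the TDLF."""
    delta = torch.zeros(6, requires_grad=True)     # increment on the tangent plane
    opt = torch.optim.Adam([delta], lr=lr)
    for step in range(steps):
        alpha = 1.0 + 9.0 * step / steps           # widen the frequency band over time
        rendered = render_fn(init_pose + delta, tdlf_weights(10, alpha))
        loss = ((rendered - observed) ** 2).mean() # photometric residual
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (init_pose + delta).detach()
```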
Despite significant progress in deep video denoising, it remains very challenging to exploit both historical and future frames. Bidirectional recurrent networks (BiRNN) have shown appealing performance on several video restoration tasks. However, BiRNN is intrinsically offline because it uses a backward recurrent module to propagate from the last frame to the current one, which causes high latency and large memory consumption. To address the offline issue of BiRNN, we propose a novel recurrent network consisting of forward and look-ahead recurrent modules for unidirectional video denoising. In particular, the look-ahead module is an elaborately designed forward module for leveraging the information of near-future frames. When denoising the current frame, the hidden features from the forward and look-ahead recurrent modules are combined, making it feasible to exploit both historical and near-future frames. Due to scene motion between non-neighboring frames, border pixels may be missing when warping the look-ahead features from the near-future frame to the current one, which can be largely alleviated by incorporating forward warping and the proposed border enlargement. Experiments show that our method achieves state-of-the-art performance with constant latency and memory consumption. Code is available at https://github.com/nagejacob/flornn.
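The following is a schematic sketch of the forward / look-ahead recurrence described above; the cell design, the fixed look-ahead window, and the omission of the forward-warping and border-enlargement steps (noted in the comments) are illustrative simplifications.

```python
import torch
import torch.nn as nn

class RecurrentCell(nn.Module):
    """One forward recurrent step: fuse the current frame with the hidden state."""
    def __init__(self, ch=16):
        super().__init__()
        self.fuse = nn.Sequential(nn.Conv2d(3 + ch, ch, 3, padding=1), nn.ReLU())

    def forward(self, frame, hidden):
        return self.fuse(torch.cat([frame, hidden], dim=1))

class LookAheadDenoiser(nn.Module):
    """Combines a forward hidden state (history) with a short look-ahead pass
    over the next L frames, so latency and memory stay constant in L."""
    def __init__(self, ch=16, lookahead=2):
        super().__init__()
        self.forward_cell = RecurrentCell(ch)
        self.look_cell = RecurrentCell(ch)
        self.head = nn.Conv2d(2 * ch, 3, 3, padding=1)
        self.ch, self.L = ch, lookahead

    def forward(self, frames):                       # frames: (T, B, 3, H, W)
        T, B, _, H, W = frames.shape
        h_fwd = frames.new_zeros(B, self.ch, H, W)
        outputs = []
        for t in range(T - self.L):
            h_fwd = self.forward_cell(frames[t], h_fwd)   # history up to frame t
            # Look-ahead over near-future frames; in the full method these
            # features would be forward-warped back to frame t with border
            # enlargement to handle scene motion.
            h_look = frames.new_zeros(B, self.ch, H, W)
            for s in range(t + 1, t + 1 + self.L):
                h_look = self.look_cell(frames[s], h_look)
            outputs.append(self.head(torch.cat([h_fwd, h_look], dim=1)))
        return torch.stack(outputs)                  # (T - L, B, 3, H, W)
```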
Fine-grained visual classification (FGVC) aims to recognize objects from subcategories. It is a very challenging task because of the subtle inter-class differences. Existing works apply large convolutional neural networks or vision transformers as feature extractors, which are extremely computationally expensive. In practice, real-world fine-grained recognition scenarios often require lighter mobile networks that can be used offline. However, the feature-extraction capability of basic mobile networks is weaker than that of large-scale models. Based on the lightweight MobileNetV2, this paper proposes a progressive multi-stage interactive training method with a Recursive Mosaic Generator (RMG-PMSI). First, we propose a Recursive Mosaic Generator (RMG) that produces images of different granularities at different stages. Then, the features of different stages pass through a Multi-Stage Interaction (MSI) module, which strengthens and complements the corresponding features of different stages. Finally, using progressive training (P), the features extracted by the model at different stages can be fully exploited and fused with one another. Experiments on three well-known fine-grained benchmarks show that RMG-PMSI significantly improves performance, with good robustness and transferability.
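A minimal sketch of a recursive mosaic (jigsaw-style) generator: the image is split into an n-by-n patch grid and the patches are shuffled, yielding a different granularity per stage. The grid sizes per stage shown in the usage comment are assumptions for illustration.

```python
import torch

def mosaic(images: torch.Tensor, n: int) -> torch.Tensor:
    """Shuffle a (B, C, H, W) batch on an n x n patch grid (H, W divisible by n)."""
    B, C, H, W = images.shape
    ph, pw = H // n, W // n
    patches = (images.reshape(B, C, n, ph, n, pw)      # split into the grid
                     .permute(0, 2, 4, 1, 3, 5)
                     .reshape(B, n * n, C, ph, pw))
    patches = patches[:, torch.randperm(n * n)]        # shuffle patch positions
    return (patches.reshape(B, n, n, C, ph, pw)        # reassemble the image
                   .permute(0, 3, 1, 4, 2, 5)
                   .reshape(B, C, H, W))

# e.g. coarse-to-fine granularities feeding three training stages:
# x1, x2, x3 = mosaic(x, 2), mosaic(x, 4), mosaic(x, 8)
```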
Face anonymization with generative models has become increasingly prevalent, since such models sanitize private information by generating virtual face images, ensuring both privacy and image utility. Such virtual face images are usually unidentifiable after the original identity is removed or protected. In this paper, we formalize and tackle the problem of generating identifiable virtual face images. Our virtual face images are visually different from the originals, for privacy protection. In addition, they carry new virtual identities, which can be directly used for face recognition. We propose an Identifiable Virtual Face Generator (IVFG) to generate such virtual face images. The IVFG projects the latent vector of an original face image into a virtual one according to a user-specific key, based on which the virtual face image is generated. To make the virtual face images identifiable, we propose a multi-task learning objective as well as a triplet-style training strategy to learn the IVFG. We evaluate the performance of our virtual face images using different face recognizers on different face image datasets, all of which demonstrate the effectiveness of the IVFG in generating identifiable virtual face images.
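A schematic sketch of the key-conditioned latent projection described above, assuming StyleGAN-like 512-dimensional latents; the projector architecture and key dimension are illustrative choices, not the paper's exact design.

```python
import torch
import torch.nn as nn

class LatentProjector(nn.Module):
    """Maps an original face latent plus a user-specific key to the latent of a
    new, recognizable virtual identity."""
    def __init__(self, latent_dim=512, key_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + key_dim, latent_dim), nn.LeakyReLU(0.2),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, w, key):
        return self.net(torch.cat([w, key], dim=-1))

# Behavior targeted by the multi-task / triplet-style objectives:
#   same face, same key      -> same virtual identity (anchor / positive)
#   same face, different key -> different virtual identity (negative)
```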
A recent study has revealed a phenomenon called neural collapse, in which the within-class means of features and the classifier weight vectors converge to the vertices of a simplex equiangular tight frame at the terminal phase of training for classification. In this paper, we explore the corresponding structures of the last-layer feature centers and classifiers in semantic segmentation. Based on our empirical and theoretical analysis, we point out that semantic segmentation naturally brings contextual correlation and imbalanced distribution among classes, which breaks the equiangular and maximally separated structure of neural collapse for both feature centers and classifiers. However, such a symmetric structure is beneficial to discrimination for the minor classes. To preserve these advantages, we introduce a regularizer on feature centers to encourage the network to learn features closer to this appealing structure in imbalanced semantic segmentation. Experimental results show that our method brings significant improvements on both 2D and 3D semantic segmentation benchmarks. Moreover, our method ranks 1st and sets a new record (+6.8% mIoU) on the ScanNet200 test leaderboard. Code will be available at https://github.com/dvlab-research/Imbalanced-Learning.
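As one way to picture such a regularizer, the sketch below pushes the pairwise cosine similarities of the K class feature centers toward the simplex-ETF value of -1/(K-1); this illustrates the target structure and is not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def etf_regularizer(centers: torch.Tensor) -> torch.Tensor:
    """centers: (K, D) per-class feature means of the last layer."""
    K = centers.shape[0]
    z = F.normalize(centers, dim=1)
    cos = z @ z.t()                                  # (K, K) pairwise cosines
    target = torch.full_like(cos, -1.0 / (K - 1))    # ETF off-diagonal value
    off_diag = ~torch.eye(K, dtype=torch.bool, device=cos.device)
    return ((cos - target) ** 2)[off_diag].mean()    # pull centers toward the ETF
```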
Weakly-supervised object localization aims to indicate the category as well as the scope of an object in an image given only image-level labels. Most existing works are based on Class Activation Mapping (CAM) and endeavor to enlarge the discriminative area inside the activation map to perceive the whole object, yet ignore the co-occurrence confounder of object and context (e.g., fish and water), which makes it hard for the model to distinguish object boundaries. Besides, the use of CAM also brings a dilemma: classification and localization always suffer from a performance gap and cannot reach their highest accuracy simultaneously. In this paper, we propose a causal knowledge distillation method, dubbed KD-CI-CAM, to address these two under-explored issues in one go. More specifically, we tackle the co-occurrence context confounder problem via causal intervention (CI), which explores the causalities among image features, contexts, and categories to eliminate the biased object-context entanglement in the class activation maps. Based on the de-biased object features, we further propose a multi-teacher causal distillation framework to balance the absorption of classification knowledge and localization knowledge during model training. Extensive experiments on several benchmarks demonstrate the effectiveness of KD-CI-CAM in learning clear object boundaries from confounding contexts and resolving the dilemma between classification and localization performance.
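The multi-teacher balance can be sketched as distilling the student from a classification-oriented teacher and a localization-oriented teacher with a trade-off weight; the weighting scheme and temperature below are assumptions, not the paper's exact recipe.

```python
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, cls_teacher_logits, loc_teacher_logits,
                          beta=0.5, tau=4.0):
    """Standard temperature-scaled KD against two teachers, mixed by `beta`."""
    def kd(teacher_logits):
        p_t = F.softmax(teacher_logits / tau, dim=1)
        log_p_s = F.log_softmax(student_logits / tau, dim=1)
        return F.kl_div(log_p_s, p_t, reduction="batchmean") * tau * tau
    return beta * kd(cls_teacher_logits) + (1.0 - beta) * kd(loc_teacher_logits)
```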
Witnessing the impressive achievements of pre-training techniques on large-scale data in computer vision and natural language processing, we wonder whether this idea could be adopted in a grab-and-go spirit to mitigate the sample-inefficiency problem of visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive information irrelevant to decision making, rendering predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pretraining in visuomotor driving. We aim to learn policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns a driving policy representation by predicting the future ego-motion and optimizing the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving-policy-related representations and is thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, with improvements ranging from 2% to over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
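The photometric objective can be illustrated with a standard self-supervised error (SSIM + L1, as in monodepth-style methods); the 0.85 weighting and 3x3 window are common conventions assumed here, not necessarily the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def photometric_error(pred: torch.Tensor, target: torch.Tensor, alpha=0.85):
    """pred / target: (B, 3, H, W) images; returns a scalar photometric loss."""
    C1, C2 = 0.01 ** 2, 0.03 ** 2
    mu_p, mu_t = F.avg_pool2d(pred, 3, 1, 1), F.avg_pool2d(target, 3, 1, 1)
    var_p = F.avg_pool2d(pred * pred, 3, 1, 1) - mu_p ** 2
    var_t = F.avg_pool2d(target * target, 3, 1, 1) - mu_t ** 2
    cov = F.avg_pool2d(pred * target, 3, 1, 1) - mu_p * mu_t
    ssim = ((2 * mu_p * mu_t + C1) * (2 * cov + C2)) / \
           ((mu_p ** 2 + mu_t ** 2 + C1) * (var_p + var_t + C2))
    ssim_loss = torch.clamp((1 - ssim) / 2, 0, 1).mean()
    l1_loss = (pred - target).abs().mean()
    return alpha * ssim_loss + (1 - alpha) * l1_loss
```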
In this work, we focus on instance-level open vocabulary segmentation, intending to expand a segmenter for instance-wise novel categories without mask annotations. We investigate a simple yet effective framework with the help of image captions, focusing on exploiting thousands of object nouns in captions to discover instances of novel classes. Rather than adopting pretrained caption models or using massive caption datasets with complex pipelines, we propose an end-to-end solution from two aspects: caption grounding and caption generation. In particular, we devise a joint Caption Grounding and Generation (CGG) framework based on a Mask Transformer baseline. The framework has a novel grounding loss that performs explicit and implicit multi-modal feature alignments. We further design a lightweight caption generation head to allow for additional caption supervision. We find that grounding and generation complement each other, significantly enhancing the segmentation performance for novel categories. We conduct extensive experiments on the COCO dataset with two settings: Open Vocabulary Instance Segmentation (OVIS) and Open Set Panoptic Segmentation (OSPS). The results demonstrate the superiority of our CGG framework over previous OVIS methods, achieving a large improvement of 6.8% mAP on novel classes without extra caption data. Our method also achieves over 15% PQ improvements for novel classes on the OSPS benchmark under various settings.
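The grounding loss can be pictured as a symmetric contrastive alignment between mask-query embeddings and the embeddings of object nouns from the caption; the one-to-one noun-to-query pairing and the temperature below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def grounding_loss(query_emb: torch.Tensor, noun_emb: torch.Tensor, tau=0.07):
    """query_emb: (N, D) mask-query features; noun_emb: (N, D) noun features,
    with the i-th noun assumed grounded to the i-th query."""
    q = F.normalize(query_emb, dim=1)
    t = F.normalize(noun_emb, dim=1)
    logits = q @ t.t() / tau                          # (N, N) similarity matrix
    labels = torch.arange(q.shape[0], device=q.device)
    # symmetric InfoNCE: query -> noun and noun -> query
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```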
Nearest-Neighbor (NN) classification has proven to be a simple and effective approach for few-shot learning. The query data can be classified efficiently by finding the nearest support class based on features extracted by pretrained deep models. However, NN-based methods are sensitive to the data distribution and may produce false predictions if the samples in the support set happen to lie around the distribution boundary of different classes. To solve this issue, we present P3DC-Shot, an improved nearest-neighbor-based few-shot classification method empowered by prior-driven data calibration. Inspired by the distribution calibration technique, which utilizes the distribution or statistics of the base classes to calibrate the data for few-shot tasks, we propose a novel discrete data calibration operation that is more suitable for NN-based few-shot classification. Specifically, we treat the prototypes representing each base class as priors and calibrate each support datum based on its similarity to the different base prototypes. Then, we perform NN classification using these discretely calibrated support data. Results from extensive experiments on various datasets show that our efficient non-learning-based method can outperform, or at least be comparable to, SOTA methods that require additional learning steps.
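A minimal sketch of prior-driven calibration followed by NN classification: each support feature is pulled toward its most similar base prototypes before matching. The top-k weighting and mixing coefficient are an illustrative reading of the abstract, not the paper's exact operation.

```python
import torch
import torch.nn.functional as F

def calibrate(support: torch.Tensor, base_protos: torch.Tensor,
              lam=0.3, topk=5) -> torch.Tensor:
    """support: (S, D) support features; base_protos: (C, D) base-class means."""
    sim = F.normalize(support, dim=1) @ F.normalize(base_protos, dim=1).t()
    w, idx = sim.topk(topk, dim=1)                   # nearest base priors
    w = F.softmax(w, dim=1)                          # similarity-based weights
    prior = (w.unsqueeze(-1) * base_protos[idx]).sum(dim=1)
    return (1 - lam) * support + lam * prior         # calibrated support features

def nn_classify(query, support, support_labels):
    dist = torch.cdist(query, support)               # (Q, S) Euclidean distances
    return support_labels[dist.argmin(dim=1)]        # label of nearest support

# query_labels = nn_classify(q_feat, calibrate(s_feat, protos), s_labels)
```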